Day 09 – Model Comparison Report: Metrics + Recommendation Lists

🎯 Background and Goals

Yesterday (Day 8), we compared two baseline models, User-based CF (KNN) and Item-based TF-IDF, and logged Precision/Recall for each.

👉 But for a recommender system, the numbers alone are not enough. We also want:

  • Side-by-side lists: to see which anime each model actually recommends
  • The user's perspective: the anime a user actually liked in the test set, shown next to each model's recommendations

Today we will put both kinds of information into the comparison report and save them as artifacts, so they are easy to inspect in the MLflow UI.


⚙️ Detailed Steps and Code Examples

Suggested location: notebooks/day9_model_report.ipynb


1. Load the data

import os
import pandas as pd
import numpy as np
import mlflow
from sklearn.neighbors import NearestNeighbors
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

DATA_DIR = "/usr/mlflow/data"

anime = pd.read_csv(os.path.join(DATA_DIR, "anime_clean.csv"))
ratings_train = pd.read_csv(os.path.join(DATA_DIR, "ratings_train.csv"))
ratings_test = pd.read_csv(os.path.join(DATA_DIR, "ratings_test.csv"))

print("Anime:", anime.shape)
print("Train:", ratings_train.shape)
print("Test:", ratings_test.shape)

2. User-based CF (KNN version) (same as yesterday)

# Build the user-item matrix
user_item_matrix = ratings_train.pivot_table(
    index="user_id", columns="anime_id", values="rating"
).fillna(0)

knn = NearestNeighbors(metric="cosine", algorithm="brute", n_neighbors=6, n_jobs=-1)
knn.fit(user_item_matrix)

def recommend_user_based(user_id, top_n=10):
    if user_id not in user_item_matrix.index:
        return pd.DataFrame(columns=["anime_id", "name", "genre"])
    
    user_vector = user_item_matrix.loc[[user_id]]
    distances, indices = knn.kneighbors(user_vector, n_neighbors=6)
    neighbor_ids = user_item_matrix.index[indices.flatten()[1:]]  # exclude the query user itself
    
    neighbor_ratings = user_item_matrix.loc[neighbor_ids]
    mean_scores = neighbor_ratings.mean().sort_values(ascending=False)
    
    seen = user_item_matrix.loc[user_id]
    seen = seen[seen > 0].index
    recommendations = mean_scores.drop(seen).head(top_n)
    
    return anime[anime["anime_id"].isin(recommendations.index)][["anime_id","name","genre"]]
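
Before evaluating at scale, a quick spot check helps confirm the function behaves; sample_uid here is just the first user in the training set, substitute any id you like:

# Spot-check the recommender for one known user
sample_uid = ratings_train["user_id"].iloc[0]
print(recommend_user_based(sample_uid, top_n=5))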

3. Item-based TF-IDF (same as yesterday)

anime["text"] = anime["genre"].fillna("") + " " + anime["type"].fillna("")

tfidf = TfidfVectorizer(stop_words="english")
tfidf_matrix = tfidf.fit_transform(anime["text"])
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
indices = pd.Series(anime.index, index=anime["anime_id"]).drop_duplicates()

def recommend_item_based(user_id, top_n=10):
    user_ratings = ratings_train[ratings_train["user_id"] == user_id]
    liked = user_ratings[user_ratings["rating"] > 7]["anime_id"].tolist()
    
    if len(liked) == 0:
        return pd.DataFrame(columns=["anime_id","name"])
    
    sim_scores = np.zeros(cosine_sim.shape[0])
    for anime_id in liked:
        if anime_id in indices:
            idx = indices[anime_id]
            sim_scores += cosine_sim[idx]
    
    sim_scores = sim_scores / len(liked)
    sim_indices = sim_scores.argsort()[::-1]
    
    seen = set(user_ratings["anime_id"])
    rec_ids = [anime.loc[i, "anime_id"] for i in sim_indices if anime.loc[i, "anime_id"] not in seen][:top_n]
    
    return anime[anime["anime_id"].isin(rec_ids)][["anime_id","name","genre"]]
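
One caveat with this setup: linear_kernel(tfidf_matrix, tfidf_matrix) materializes a dense N×N similarity matrix, which is fine at this dataset's size but grows quadratically with the catalog. A minimal sketch of computing the averaged similarity on the fly, using only the rows of the liked anime (same tfidf_matrix and indices as above):

def averaged_similarity(liked_ids):
    # Map anime_ids to row positions in the TF-IDF matrix
    rows = [int(indices[a]) for a in liked_ids if a in indices]
    if not rows:
        return None
    # A (len(rows) x N) similarity block instead of the full N x N matrix
    sims = linear_kernel(tfidf_matrix[rows], tfidf_matrix)
    return sims.mean(axis=0)  # effectively the same vector as sim_scores above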

4. Precision/Recall function (same as yesterday)

def precision_recall_at_k(user_id, model_func, k=10):
    recs = model_func(user_id, top_n=k)
    if recs.empty:
        return np.nan, np.nan
    
    user_test = ratings_test[ratings_test["user_id"] == user_id]
    liked = set(user_test[user_test["rating"] > 7]["anime_id"])
    if len(liked) == 0:
        return np.nan, np.nan
    
    hit = len(set(recs["anime_id"]) & liked)
    
    precision = hit / k
    recall = hit / len(liked)
    return precision, recall
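
To make the definitions concrete: with k = 10, if 2 of the 10 recommended anime appear in a user's test-set "liked" list of 8 titles, then precision@10 = 2/10 = 0.2 and recall@10 = 2/8 = 0.25. Users with no liked titles in the test set (or no recommendations at all) return NaN and are skipped when averaging in the next step.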

5. Evaluate and build the comparison table (same as yesterday)

sample_users = np.random.choice(ratings_train["user_id"].unique(), 300, replace=False)

def evaluate_model(model_func, name):
    precisions, recalls = [], []
    for u in sample_users:
        p, r = precision_recall_at_k(u, model_func, 10)
        if not np.isnan(p): precisions.append(p)
        if not np.isnan(r): recalls.append(r)
    return {
        "model": name,
        "precision@10": np.mean(precisions),
        "recall@10": np.mean(recalls)
    }

results = []
results.append(evaluate_model(recommend_user_based, "user_based_cf"))
results.append(evaluate_model(recommend_item_based, "item_based_tfidf"))

df_compare = pd.DataFrame(results)
print(df_compare)
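
Note that np.random.choice above is unseeded, so the 300 sampled users (and therefore the exact numbers) vary between runs. For a reproducible comparison you can seed the sampler, mirroring the random_state=42 used for the report users in step 6:

# Optional: seed the user sample so the comparison is reproducible
rng = np.random.default_rng(42)
sample_users = rng.choice(ratings_train["user_id"].unique(), 300, replace=False)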

Sample output:

model             precision@10  recall@10
user_based_cf           0.0046     0.0159
item_based_tfidf        0.0051     0.0165

Both baselines look weak in absolute terms, which is expected with interactions this sparse; what matters here is the relative comparison, plus the recommendation lists below.

6. Sample recommendation-list report

report_users = ratings_test["user_id"].drop_duplicates().sample(3, random_state=42)

records = []
for u in report_users:
    liked = ratings_test[(ratings_test["user_id"] == u) & (ratings_test["rating"] > 7)]
    liked_names = anime[anime["anime_id"].isin(liked["anime_id"])]["name"].tolist()
    
    recs_u = recommend_user_based(u, top_n=10)["name"].tolist()
    recs_i = recommend_item_based(u, top_n=10)["name"].tolist()
    
    records.append({
        "user_id": u,
        "liked_in_test": liked_names,
        "user_based_recommend": recs_u,
        "item_based_recommend": recs_i
    })

df_examples = pd.DataFrame(records)
print(df_examples)

The output will look something like this:

https://ithelp.ithome.com.tw/upload/images/20250922/201786266rViYOwW9E.png
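
One small caveat: each cell of df_examples holds a Python list, so the CSV saved in step 7 will store them as stringified lists. If you prefer human-readable cells, you can join the names first (purely cosmetic):

# Optional: flatten the list cells into readable strings before writing CSV
for col in ["liked_in_test", "user_based_recommend", "item_based_recommend"]:
    df_examples[col] = df_examples[col].apply(" | ".join)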


7. Log to MLflow

mlflow.set_tracking_uri(os.getenv("MLFLOW_TRACKING_URI", "http://mlflow:5000"))
mlflow.set_experiment("anime-recommender-report")

# Save the CSVs locally first
df_compare.to_csv("day9_compare_metrics.csv", index=False)
df_examples.to_csv("day9_recommend_examples.csv", index=False)

with mlflow.start_run(run_name="day9_model_report"):
    # log metrics (note: MLflow metric keys cannot contain "@", so use "_at_10")
    for row in results:
        mlflow.log_metric(f"{row['model']}_precision_at_10", row["precision@10"])
        mlflow.log_metric(f"{row['model']}_recall_at_10", row["recall@10"])
    
    # log artifacts
    mlflow.log_artifact("day9_compare_metrics.csv", artifact_path="reports")
    mlflow.log_artifact("day9_recommend_examples.csv", artifact_path="reports")

https://ithelp.ithome.com.tw/upload/images/20250922/20178626tarn5orif1.png
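
Optionally, you can also record the evaluation settings as params, so the run is self-describing when you revisit it in the UI. A small sketch; these lines belong inside the with mlflow.start_run(...) block above, and the param names are my own choice:

# Record how the numbers were produced (add inside the with-block above)
mlflow.log_param("k", 10)                              # cutoff for precision/recall
mlflow.log_param("n_sample_users", len(sample_users))  # size of the evaluation sample
mlflow.log_param("like_threshold", 7)                  # rating > 7 counts as "liked"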


📊 Pipeline Overview (plain text)

ratings_train.csv + ratings_test.csv
        │
        ▼
User-based CF → Precision/Recall
Item-based TF-IDF → Precision/Recall
        │
        ▼
Metrics comparison table (CSV) + sample recommendation lists (CSV)
        │
        ▼
MLflow log metrics + log_artifact

✅ Key Takeaways

  • The heart of Day 9: log not only the numbers, but the recommendation lists as well
  • MLflow metrics: Precision/Recall.
  • MLflow artifacts: the comparison table (CSV) + sample recommendation lists (CSV).
  • Together these give a much more intuitive picture of how the models differ than staring at metrics alone.

🔮 Looking Ahead (Preview)

On Day 10 we will package these experiments as an MLflow Project, using MLproject + conda.yaml, so that a team or a CI/CD pipeline can reproduce the entire workflow with a single command.

